83 research outputs found

    NP-Hardness of Tensor Network Contraction Ordering

    We study the optimal order (or sequence) in which to contract a tensor network at minimal computational cost. We consider two versions of this optimal sequence: one that minimizes the number of operations (OMS) and one that minimizes the time complexity (CMS). Existing results show only that OMS is NP-hard, with no conclusion for the CMS problem. In this work, we first reduce CMS to CMS-0, a sub-problem of CMS with no free indices. We then prove that CMS is easier than OMS, both in the general case and on trees. Finally, we prove that CMS is nevertheless NP-hard. Based on these results, we establish the relationships among the hardness of different tensor network contraction problems.
    Comment: Jianyu Xu and Hanwen Zhang are equal contributors. 10 pages (references and appendix excluded), 20 pages in total, 6 figures.
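    To make the ordering question concrete, here is a minimal Python sketch (illustrative only, not taken from the paper) showing that even for a three-tensor chain the operation count, the quantity the OMS objective minimizes, depends strongly on the contraction order:

    import numpy as np

    def chain_cost(dims, order):
        """Scalar-multiplication count for a 3-matrix chain.
        dims = (m, n, p, q) for A (m x n), B (n x p), C (p x q);
        order is 'left' for (AB)C, 'right' for A(BC)."""
        m, n, p, q = dims
        if order == "left":   # AB costs m*n*p, then (AB)C costs m*p*q
            return m * n * p + m * p * q
        else:                 # BC costs n*p*q, then A(BC) costs m*n*q
            return n * p * q + m * n * q

    dims = (10, 1000, 10, 1000)
    print("cost of (AB)C:", chain_cost(dims, "left"))   # 200,000
    print("cost of A(BC):", chain_cost(dims, "right"))  # 20,000,000

    # Sanity check: both orders produce the same tensor.
    A = np.random.rand(10, 1000)
    B = np.random.rand(1000, 10)
    C = np.random.rand(10, 1000)
    assert np.allclose((A @ B) @ C, A @ (B @ C))

    Finding the order that minimizes this count over an arbitrary network, rather than a chain, is the combinatorial problem whose hardness the paper studies.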

    Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient

    Recently, learning algorithms inspired by backpropagation through time have been widely introduced into spiking neural networks (SNNs) to improve performance, which makes it possible to attack these models accurately given spatio-temporal gradient maps. We propose two approaches to address the challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike converter that converts continuous gradients to ternary ones compatible with spike inputs. Then, we design a gradient trigger that constructs ternary gradients which randomly flip the spike inputs at a controllable turnover rate whenever all-zero gradients are encountered. Putting these methods together, we build an adversarial attack methodology for SNNs trained by supervised algorithms. Moreover, we analyze the influence of the training loss function and of the firing threshold of the penultimate layer, which reveals a "trap" region under the cross-entropy loss that can be escaped by threshold tuning. Extensive experiments are conducted to validate the effectiveness of our solution. Beyond the quantitative analysis of the influencing factors, we provide evidence that SNNs are more robust against adversarial attacks than ANNs. This work can help reveal what happens during SNN attacks and may stimulate more research on the security of SNN models and neuromorphic devices.
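    As a rough illustration of the two components the abstract describes, the following NumPy sketch ternarizes a continuous gradient and falls back to random spike flips when the gradient vanishes. The function names, threshold rule, and flip mechanism are assumptions made for illustration, not the authors' actual design:

    import numpy as np

    def gradient_to_spike(grad, threshold=1e-3):
        """Hypothetical converter: map continuous gradients to {-1, 0, +1}
        so the perturbation stays compatible with binary spike inputs."""
        ternary = np.zeros_like(grad)
        ternary[grad > threshold] = 1.0
        ternary[grad < -threshold] = -1.0
        return ternary

    def gradient_trigger(spikes, turnover_rate=0.01, rng=None):
        """Hypothetical trigger: when the gradient is all zero, build a
        ternary 'gradient' that flips spikes at a controllable rate."""
        if rng is None:
            rng = np.random.default_rng()
        flip_mask = rng.random(spikes.shape) < turnover_rate
        # +1 turns a silent input into a spike; -1 removes an existing spike
        return np.where(flip_mask, 1.0 - 2.0 * spikes, 0.0)

    def perturb(spikes, grad, turnover_rate=0.01):
        """One attack step on binary spike inputs (illustrative)."""
        g = gradient_to_spike(grad)
        if not g.any():                 # vanishing gradient: fall back
            g = gradient_trigger(spikes, turnover_rate)
        return np.clip(spikes + g, 0.0, 1.0)

    The key point the sketch captures is that the perturbed input remains binary, so the adversarial example is still a valid spike train.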